22 research outputs found

    Bayesian optimization for parameter identification on a small simulation budget

    No full text
    Bayesian optimization uses a probabilistic model of the objective function to guide the search for the optimum. It is particularly interesting for the optimization of expensive-to-evaluate functions. Over the last decade, it has been increasingly used for industrial optimization problems, especially for numerical design involving complex computer simulations. We argue that Bayesian optimization deserves the attention of anyone who has to identify the parameters of a model from a very limited number of model simulations, a limit typically imposed by model complexity. In this paper, we describe, as simply as possible, how Bayesian optimization can be used for parameter identification, and we present a new application. We concentrate on two algorithms, namely EGO (for Efficient Global Optimization) and IAGO (for Informational Approach to Global Optimization), and describe how they can be used for parameter identification when the budget for evaluating the cost function is severely limited. Some open questions that must be addressed for theoretical and practical reasons are also indicated.
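    To make the EGO side of this concrete, here is a minimal sketch (not the authors' implementation) of an EI-driven identification loop in Python: a Gaussian-process surrogate is fitted to the evaluations made so far, and the next parameter vector is chosen by maximizing expected improvement over a random candidate set. The function `identification_cost` is a hypothetical stand-in for the expensive simulation-based misfit.

```python
# EGO-style Bayesian optimization sketch; `identification_cost` is a
# hypothetical stand-in for an expensive simulation-based misfit.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def identification_cost(theta):
    # Placeholder: in practice, run the simulator and compare with data.
    return np.sum((theta - 0.3) ** 2)

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(5, 2))           # initial design: 5 points, 2 parameters
y = np.array([identification_cost(t) for t in X])

for _ in range(20):                          # severely limited evaluation budget
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
    gp.fit(X, y)
    cand = rng.uniform(0, 1, size=(2000, 2)) # candidate points
    mu, sigma = gp.predict(cand, return_std=True)
    best = y.min()
    u = (best - mu) / np.maximum(sigma, 1e-12)
    ei = (best - mu) * norm.cdf(u) + sigma * norm.pdf(u)  # expected improvement
    x_next = cand[np.argmax(ei)]             # maximize EI over the candidates
    X = np.vstack([X, x_next])
    y = np.append(y, identification_cost(x_next))

print("best parameters:", X[np.argmin(y)], "cost:", y.min())
```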

    An informational approach to the global optimization of expensive-to-evaluate functions

    Full text link
    In many global optimization problems motivated by engineering applications, the number of function evaluations is severely limited by time or cost. To ensure that each evaluation contributes to the localization of good candidates for the role of global minimizer, a sequential choice of evaluation points is usually carried out. In particular, when Kriging is used to interpolate past evaluations, the uncertainty associated with the lack of information on the function can be expressed and used to compute a number of criteria accounting for the interest of an additional evaluation at any given point. This paper introduces minimizer entropy as a new Kriging-based criterion for the sequential choice of points at which the function should be evaluated. Based on stepwise uncertainty reduction, it accounts for the informational gain on the minimizer expected from a new evaluation. The criterion is approximated using conditional simulations of the Gaussian process model behind Kriging, and then inserted into an algorithm similar in spirit to the Efficient Global Optimization (EGO) algorithm. An empirical comparison is carried out between our criterion and expected improvement, one of the reference criteria in the literature. Experimental results indicate major evaluation savings over EGO. Finally, the method, which we call IAGO (for Informational Approach to Global Optimization), is extended to robust optimization problems, where both the factors to be tuned and the function evaluations are corrupted by noise.
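    The core quantity here is the distribution of the minimizer under the Gaussian process posterior. A minimal sketch of how it can be approximated (a simplification: the full IAGO criterion additionally conditions the simulations on a hypothetical evaluation at each candidate point): draw posterior sample paths on a grid, record where each path attains its minimum, and compute the entropy of the resulting empirical distribution.

```python
# Sketch: empirical minimizer distribution and its entropy from GP posterior
# sample paths (simplified relative to the full IAGO criterion).
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

X_obs = np.array([[0.1], [0.4], [0.9]])      # past evaluation points
y_obs = np.array([0.8, -0.2, 0.5])           # past evaluation results
gp = GaussianProcessRegressor(kernel=RBF(0.15)).fit(X_obs, y_obs)

grid = np.linspace(0, 1, 200).reshape(-1, 1)
paths = gp.sample_y(grid, n_samples=500, random_state=0)  # conditional simulations
argmins = np.argmin(paths, axis=0)           # minimizer location of each path

p = np.bincount(argmins, minlength=len(grid)) / argmins.size
entropy = -np.sum(p[p > 0] * np.log(p[p > 0]))            # minimizer entropy
print(f"estimated minimizer entropy: {entropy:.3f} nats")
```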

    Global optimization of expensive-to-evaluate functions: an empirical comparison of two sampling criteria

    No full text
    In many global optimization problems motivated by engineering applications, the number of function evaluations is severely limited by time or cost. To ensure that each of these evaluations usefully contributes to the localization of good candidates for the role of global minimizer, a stochastic model of the function can be built to conduct a sequential choice of evaluation points. Based on Gaussian processes and Kriging, the authors have recently introduced the informational approach to global optimization (IAGO), which provides a one-step optimal choice of evaluation points in terms of reduction of uncertainty on the location of the minimizers. To do so, the probability density of the minimizers is approximated using conditional simulations of the Gaussian process model behind Kriging. In this paper, an empirical comparison between the underlying sampling criterion, called conditional minimizer entropy (CME), and the standard expected improvement (EI) sampling criterion is presented. Classical test functions are used, as well as sample paths of the Gaussian model and an actual industrial application. The results show the interest of the CME sampling criterion in terms of evaluation savings.
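    For reference, the EI criterion used as the baseline in this comparison has a standard closed form under the Gaussian model (for minimization). Writing $\mu(x)$ and $\sigma(x)$ for the Kriging mean and standard deviation at $x$, $f_{\min}$ for the best evaluation result so far, and $\Phi$ and $\varphi$ for the standard normal cdf and pdf:

\[
\mathrm{EI}(x) = \bigl(f_{\min} - \mu(x)\bigr)\,\Phi(u) + \sigma(x)\,\varphi(u),
\qquad
u = \frac{f_{\min} - \mu(x)}{\sigma(x)}.
\]

    The CME criterion has no such closed form, which is why it must be approximated by conditional simulations as described above.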

    Global optimization based on noisy evaluations: an empirical study of two statistical approaches

    No full text
    The optimization of the output of complex computer codes often has to be achieved with a small budget of evaluations. Algorithms dedicated to such problems have been developed and compared, such as the Expected Improvement algorithm (EI) or the Informational Approach to Global Optimization (IAGO). However, the influence of noisy evaluation results on the outcome of these comparisons has often been neglected, despite its frequent occurrence in industrial problems. In this paper, empirical convergence rates for EI and IAGO are compared when an additive noise corrupts the result of an evaluation. IAGO appears more efficient than EI and than various modifications of EI designed to deal with noisy evaluations.
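    In the Gaussian-process framework, additive evaluation noise is commonly absorbed into the model as a nugget (noise variance) term, so that Kriging smooths rather than interpolates the data. A minimal sketch of this standard device (an assumption of this illustration, not the paper's exact setup):

```python
# Sketch: Kriging with additive evaluation noise modeled via a WhiteKernel nugget.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(1)
X = rng.uniform(0, 1, size=(15, 1))
y = np.sin(6 * X.ravel()) + rng.normal(0, 0.2, size=15)  # noisy evaluations

kernel = RBF(length_scale=0.2) + WhiteKernel(noise_level=0.04)
gp = GaussianProcessRegressor(kernel=kernel).fit(X, y)

mu, sigma = gp.predict(np.array([[0.5]]), return_std=True)
print(f"prediction at 0.5: {mu[0]:.3f} +/- {sigma[0]:.3f}")
```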

    Sequential design of computer experiments for the estimation of a probability of failure

    Full text link
    This paper deals with the problem of estimating the volume of the excursion set of a function $f: \mathbb{R}^d \to \mathbb{R}$ above a given threshold, under a probability measure on $\mathbb{R}^d$ that is assumed to be known. In the industrial world, this corresponds to the problem of estimating a probability of failure of a system. When only an expensive-to-simulate model of the system is available, the budget for simulations is usually severely limited and therefore classical Monte Carlo methods ought to be avoided. One of the main contributions of this article is to derive SUR (stepwise uncertainty reduction) strategies from a Bayesian decision-theoretic formulation of the problem of estimating a probability of failure. These sequential strategies use a Gaussian process model of $f$ and aim at performing evaluations of $f$ as efficiently as possible to infer the value of the probability of failure. We compare these strategies to other strategies also based on a Gaussian process model for estimating a probability of failure.
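    A simplified illustration of the setting (explicitly not the article's SUR criterion, but a cruder sequential rule sometimes used as a baseline): with a GP model of $f$ and a threshold $T$, the posterior gives a probability of exceeding $T$ at every point, and the next evaluation can be placed where the classification above/below the threshold is most ambiguous.

```python
# Sketch: GP-based estimation of a probability of failure P(f(X) > T), using a
# crude "most ambiguous point" sequential rule, not the article's SUR strategy.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def f(x):                                    # hypothetical expensive simulator
    return np.sin(5 * x.ravel()) + x.ravel()

T = 1.2                                      # failure threshold
rng = np.random.default_rng(2)
X = rng.uniform(0, 1, size=(4, 1))
y = f(X)

for _ in range(10):
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5)).fit(X, y)
    grid = np.linspace(0, 1, 500).reshape(-1, 1)
    mu, sigma = gp.predict(grid, return_std=True)
    p_fail = norm.cdf((mu - T) / np.maximum(sigma, 1e-12))    # P(f(x) > T | data)
    x_next = grid[np.argmax(np.minimum(p_fail, 1 - p_fail))]  # most ambiguous point
    X = np.vstack([X, [x_next]])
    y = np.append(y, f(x_next.reshape(1, -1)))

# Estimate of the failure probability under a uniform measure on [0, 1],
# computed from the last fitted model.
print("estimated probability of failure:", p_fail.mean())
```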

    Optimisation de fonctions coûteuses. Modèles gaussiens pour une utilisation efficace du budget d'évaluations : théorie et pratique industrielle

    No full text
    This dissertation is driven by a question central to many industrial optimization problems: how to optimize a function when the budget for its evaluation is severely limited by either time or cost? For example, when optimization relies on computer simulations, each taking several hours, the dimension and complexity of the optimization problem may seem irreconcilable with the evaluation budget (typically thirty parameters to be optimized with fewer than one hundred evaluations). This work is devoted to optimization algorithms dedicated to this context, which is out of range for most classical methods. The common principle of the methods discussed is to use Gaussian processes and Kriging to build a cheap proxy for the function to be optimized. This approximation is then used iteratively to choose the evaluation points. This choice is guided by a sampling criterion that combines local search, near promising evaluation results, with global search, in unexplored areas. Most of the criteria proposed over the years, such as the one underlying the classical EGO (for Efficient Global Optimization) algorithm, sample where the optimum is most likely to appear. By contrast, we propose an algorithm, named IAGO (for Informational Approach to Global Optimization), which samples where the information gain on the optimizer location is deemed to be highest. The organisation of this dissertation is a direct consequence of the industrial concerns which drove this work. We hope it can be of use to the optimization community, but most of all to practitioners confronted with expensive-to-evaluate functions. This is why we insist on the practical use of IAGO for the optimization of functions encountered in actual industrial problems. We also discuss how to handle constraints, noisy evaluation results, multi-objective problems, derivative evaluation results, and significant manufacturing uncertainties.

    Optimisation de fonctions coûteuses

    No full text
    This dissertation is driven by a question central to many industrial optimization problems: how to optimize a function when the budget of evaluations is severely limited by either time or cost? For example, when optimization relies on computationally expensive computer simulations taking several hours each, the dimension and complexity of the optimization problem may seem irreconcilable with the budget of evaluations. This work discusses optimization algorithms dedicated to this context, which is out of range for most classical methods. The common principle of the methods discussed is to use Gaussian processes and Kriging to build a cheap proxy for the function to be optimized. This approximation is then used to choose the evaluations iteratively. Most of the techniques proposed over the years sample where the optimum is most likely to appear. By contrast, we propose an algorithm, named IAGO (for Informational Approach to Global Optimization), which samples where the information gain on the optimizer location is deemed to be highest. The organisation of this dissertation is a direct consequence of the industrial concerns which drove this work. We hope it can be of use to the optimization community, but most of all to practitioners confronted with expensive-to-evaluate functions. This is why we insist on industrial applications and on the practical use of IAGO, not only for the standard optimization of a real function, but also when other industrial concerns have to be considered. In particular, we discuss how to handle constraints, noisy evaluation results, multi-objective problems, derivative evaluation results, or significant manufacturing uncertainties.

    Identification of expensive-to-simulate parametric models using Kriging and Stepwise Uncertainty Reduction (DOI: 10.1109/CDC.2007.4434190)

    No full text
    This paper deals with parameter identification for expensive-to-simulate models, and presents a new strategy to address the resulting optimization problem in a context where the budget for simulations is severely limited. Based on Kriging, this approach computes an approximation of the probability distribution of the optimal parameter vector, and selects the next simulation to be conducted so as to optimally reduce the entropy of this distribution. A continuous-time state-space model is used to illustrate the method.
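    To make the identification setting concrete (an illustration with hypothetical names, not the paper's model): the function to be minimized is typically a quadratic misfit between measured outputs and the outputs simulated with candidate parameters, and this misfit then plays the role of the expensive function in the Kriging-based strategy.

```python
# Sketch: quadratic identification cost for a simulated model; the hypothetical
# `simulate` stands in for an expensive continuous-time state-space solver.
import numpy as np
from scipy.integrate import solve_ivp

t_meas = np.linspace(0, 5, 50)
y_meas = np.exp(-0.7 * t_meas) + np.random.default_rng(3).normal(0, 0.01, 50)

def simulate(theta):
    # Toy first-order state-space model: dx/dt = -theta[0] * x, x(0) = theta[1].
    sol = solve_ivp(lambda t, x: -theta[0] * x, (0, 5), [theta[1]], t_eval=t_meas)
    return sol.y[0]

def identification_cost(theta):
    residuals = y_meas - simulate(theta)
    return float(residuals @ residuals)      # sum of squared output errors

print(identification_cost(np.array([0.7, 1.0])))
```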